41 research outputs found

    Senyals i sistemes : de l'ensenyament a l'aprenentatge (Signals and Systems: from teaching to learning)

    This paper presents a teaching experience in the field of engineering, specifically in a highly mathematical subject, Signals and Systems. An additional characteristic of the subject is its high number of students (more than 125) per group; with traditional teaching methods it showed a high failure rate and it was practically impossible to assess the work of each student. To carry out this innovation we changed the organization and management of the subject, the teaching activities, and the way the subject's learning objectives are assessed; to evaluate the experience, the results obtained before and after applying the new teaching methods are compared. This methodological change also allows us to assess the work done by each student and to obtain feedback from the activities, which makes formative assessment possible and reveals the students' degree of satisfaction and involvement.

    Nova metodologia per a la docència de projectes fi de carrera basada en grups de debat i cooperació (A new methodology for the supervision of final degree projects based on discussion and cooperation groups)

    This work analyses a methodological change in the execution and supervision of Final Degree Projects (PFC) that allows active follow-up and favours collaborative group work and formative assessment, with the aim of improving the quality and academic results of Final Degree Projects. A change in the methodology used to supervise PFCs is proposed, so that there is both individual and group work, continuous supervision and assessment of the work done by the student, and both a teaching and a professional view of the project. Specifically, the project is followed through periodic group meetings in which the work done in each phase of the project is presented and discussed. At each meeting the supervising lecturer introduces the topic of the day; each student then gives a brief presentation of the work done and the planned next steps, which is discussed by the whole group; finally, conclusions are drawn and the objectives for the next meeting are set. In addition, each student hands in a written commentary on the presentations given at the meeting, assessing the strengths and weaknesses of each one. For each meeting, the slides of the topic presentation and a written report on the phase discussed at the previous meeting, with the corrections and comments incorporated, are submitted through the virtual campus. Individual tutorials are also held to deal with topics or doubts specific to each project.

    Defining Asymptotic Parallel Time Complexity of Data-dependent Algorithms

    The scientific research community has reached a stage of maturity in which its strong need for high-performance computing has also spread to everyday engineering and industrial algorithms. To satisfy this need, parallel computers provide an efficient and economical way to solve large-scale and/or time-constrained problems. As a consequence, the end users of these systems have a vested interest in defining the asymptotic time complexity of parallel algorithms in order to predict their performance on a particular parallel computer. The asymptotic parallel time complexity of data-dependent algorithms depends on the number of processors, the data size and other parameters; discovering those other parameters is a challenging problem and the key to obtaining a good estimate of the performance order. Typical examples of such applications are sorting algorithms, searching algorithms and solvers of the traveling salesman problem (TSP). This article covers all the knowledge discovery aspects of the problem of defining the asymptotic parallel time complexity of data-dependent algorithms. The knowledge discovery methodology begins by designing a considerable number of experiments and measuring their execution times. An interactive and iterative process then explores the data in search of patterns and/or relationships, detecting parameters that affect performance. Once the key parameters that characterise the time complexity are known, it becomes possible to formulate a hypothesis, restart the process, and produce an improved time complexity model. Finally, the methodology predicts the performance order for new data sets on a particular parallel computer by substituting numerical values into the model. As a case study, a global pruning traveling salesman problem implementation (GP-TSP) was chosen to analyse the influence of indeterminism on the performance prediction of data-dependent parallel algorithms and to show the usefulness of the proposed knowledge discovery methodology. The hypotheses generated to define the asymptotic parallel time complexity of the TSP were corroborated one by one. The experimental results confirm the expected capability of the proposed methodology; the predictions of the performance time order compared well with the real execution times (on the order of 85%).
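    The "design experiments, measure, hypothesise key parameters, fit, check" loop described above can be pictured with a small regression sketch. The measurements, candidate models and parameter names below are invented for illustration and are not data from the GP-TSP study; it is a minimal sketch of how one hypothesis about the complexity order can be fitted to timings and compared against alternatives.

```python
import numpy as np

# Hypothetical measurements: (processors p, data size n) -> execution time in seconds
measurements = {
    (2, 1000): 4.1,  (4, 1000): 2.2,  (8, 1000): 1.2,
    (2, 2000): 16.5, (4, 2000): 8.4,  (8, 2000): 4.3,
    (2, 4000): 66.0, (4, 4000): 33.8, (8, 4000): 17.1,
}

# Candidate asymptotic models T(p, n); each encodes one hypothesis about the key parameters
candidates = {
    "n^2 / p":     lambda p, n: n ** 2 / p,
    "n log n / p": lambda p, n: n * np.log(n) / p,
    "n / p":       lambda p, n: n / p,
}

def fit_and_score(model):
    """Fit a single constant factor by least squares; return it with the mean relative error."""
    x = np.array([model(p, n) for (p, n) in measurements])
    y = np.array(list(measurements.values()))
    c = np.dot(x, y) / np.dot(x, x)
    rel_err = np.abs(c * x - y) / y
    return c, rel_err.mean()

for name, model in candidates.items():
    c, err = fit_and_score(model)
    print(f"T ~ {name:12s} constant = {c:.2e}   mean relative error = {err:.1%}")
```

    The hypothesis with the lowest relative error is kept; if none fits well, the loop restarts with new candidate parameters, mirroring the iterative refinement described in the abstract.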

    Support managing population aging stress of emergency departments in a computational way

    Acknowledgements: partially supported by a grant from the China Scholarship Council (CSC) under reference number 2013062900. Old people usually have more complex health problems and use healthcare services more frequently than young people, so the increase in both the number and the proportion of old people will clearly challenge emergency departments (EDs). This paper first presents a way to quantitatively predict and explain this challenge using simulation techniques. We then outline the capability of simulation to support decisions that address this challenge. Specifically, we use simulation to predict and explain the impact of population aging on an ED: a detailed ED simulator, validated against a public hospital ED, is used to predict the behavior of the ED under population aging over the next 15 years. Our prediction shows that the stress of population aging on EDs can no longer be ignored and that ED upgrades must be carefully planned. Based on this prediction, the costs and benefits of several upgrade proposals are evaluated.
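    As a rough illustration of the kind of what-if question such a simulator answers, the toy queueing model below scales the arrival rate and the share of elderly patients (assumed to need longer attention) over 15 years and reports the mean waiting time. All rates, service times and staffing levels are invented; the validated ED simulator used in the paper is far more detailed than this sketch.

```python
import random

def mean_wait(arrivals_per_hour, elderly_share, doctors, hours=24 * 30):
    """Toy multi-server queue: mean waiting time (hours) over one simulated month."""
    random.seed(0)
    free_at = [0.0] * doctors                           # time at which each doctor becomes free
    t, waits = 0.0, []
    while t < hours:
        t += random.expovariate(arrivals_per_hour)      # next patient arrival
        mean_service = 1.0 if random.random() < elderly_share else 0.5
        service = random.expovariate(1.0 / mean_service)
        i = free_at.index(min(free_at))                 # earliest available doctor
        start = max(t, free_at[i])
        waits.append(start - t)
        free_at[i] = start + service
    return sum(waits) / len(waits)

# Assumed demographic trend: 2% annual growth in arrivals and a growing elderly share
for year in (0, 5, 10, 15):
    lam = 6.0 * (1.02 ** year)
    old = 0.20 + 0.01 * year
    print(f"year {year:2d}: mean wait = {mean_wait(lam, old, doctors=4):.2f} h")
```

    Under these assumed trends the service becomes overloaded towards year 15 and the waiting time grows sharply, which is the qualitative effect the abstract refers to as the stress of population aging.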

    Virtual Clinical Trials : A tool for the Study of Transmission of Nosocomial Infections

    A clinical trial is a study designed to demonstrate the efficacy and safety of a drug, procedure, medical device or diagnostic test. Since clinical trials involve research on humans, they must be carefully designed and must comply strictly with a set of ethical conditions. Logistical disadvantages, ethical constraints, costs and long execution times can have a negative impact on the execution of a clinical trial. This article proposes the use of a simulation tool, the MRSA-T-Simulator, to design and perform "virtual clinical trials" for the purpose of studying MRSA contact transmission among hospitalized patients. The main advantage of the simulator is its flexibility when it comes to configuring the patient population, the healthcare staff and the simulation environment.

    An Intelligent Scheduling of Non-Critical Patients Admission for Emergency Department

    The combination of a progressively aging population, increased life expectancy and a greater number of chronic diseases contributes significantly to the growing demand for emergency medical care and thus to the saturation of Emergency Departments (EDs). This saturation is usually due to the admission of non-urgent patients, who constitute a high percentage of ED patients. Agent-based modelling (ABM) is one of the most important tools for studying complex systems and exploring the emergent behavior of this type of department, and its simulations reflect the complexity of real systems more accurately. Our proposal is an ABM to schedule the admission of non-critical patients into an ED, which can help service managers deal with the growing demand for emergency care. We assume that relocating these non-critical patients within the expected input pattern, provided initially by historical records, reduces the waiting time for all patients and therefore improves the quality of service, while also avoiding long waits. This research makes relevant knowledge available to Emergency Department managers to help them make decisions that improve the quality of the service, in anticipation of the growing demand expected in the very near future.
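    The relocation idea can be sketched with a toy load-smoothing calculation: given a historical hourly arrival pattern and an assumed attention capacity, non-critical arrivals are redistributed so that no hour exceeds the capacity. The pattern, the non-critical share and the capacity below are invented figures, not the ABM or the data used in the study.

```python
# Hypothetical expected arrivals per hour of the day (historical pattern, invented)
hourly_arrivals = [4, 3, 2, 2, 3, 5, 9, 14, 16, 15, 13, 12,
                   12, 13, 14, 15, 16, 15, 13, 11, 9, 7, 6, 5]
non_critical_share = 0.6      # fraction of arrivals that could be rescheduled (assumed)
capacity_per_hour = 11        # patients the staff can attend per hour (assumed)

critical = [round(a * (1 - non_critical_share)) for a in hourly_arrivals]
movable = sum(hourly_arrivals) - sum(critical)

# Greedy relocation: place non-critical patients hour by hour, never exceeding capacity,
# so the historical peaks are flattened across the day.
schedule, backlog = [], movable
for c in critical:
    placed = min(max(capacity_per_hour - c, 0), backlog)
    schedule.append(c + placed)
    backlog -= placed

print("original peak :", max(hourly_arrivals), "patients/hour")
print("smoothed peak :", max(schedule), "patients/hour   (unplaced:", backlog, ")")
```

    In this toy instance the peak load drops from 16 to 11 patients per hour while all non-critical patients are still attended, which is the intuition behind the expected reduction in waiting times.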

    Scalable performance analysis method for SPMD applications

    Other support: CRUE-CSIC transformative agreement. The analysis of parallel scientific applications allows us to understand their computational and communication behavior. One way of obtaining performance information is through performance tools, such as Parallel Application Signatures for Performance Prediction (PAS2P), which is based on the repeatability of parallel applications and focuses on performance analysis and prediction. The same resources that execute the parallel application are used to analyse it, creating a machine-independent model of the application and identifying its common patterns. However, the analysis is costly in terms of execution time due to the large number of synchronization communications performed by PAS2P, which degrades performance as the number of processes increases. To solve this problem, we propose a model that reduces the data dependency between processes, reducing the number of communications performed by PAS2P in the analysis stage and taking advantage of the characteristics of single program, multiple data (SPMD) applications. Our analysis proposal decreases the analysis time by a factor of 29 when the application scales to 256 processes, while keeping the error of the runtime prediction below 11%. It is worth noting that the analysis time is not significantly affected by increasing the number of application processes.
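    To illustrate the flavour of the pattern-identification step without any inter-process synchronisation, the sketch below hashes non-overlapping windows of a single process's communication trace and reports windows that repeat, i.e. candidate phases. The event format and window length are invented for illustration and do not correspond to PAS2P's actual signature construction.

```python
from collections import Counter

# Hypothetical local trace of one SPMD process: (operation, peer rank, message bytes)
trace = [("send", 1, 4096), ("recv", 1, 4096), ("send", 2, 1024),
         ("send", 1, 4096), ("recv", 1, 4096), ("send", 2, 1024),
         ("send", 1, 4096), ("recv", 1, 4096), ("send", 2, 1024),
         ("allreduce", -1, 8)]

def repeated_phases(events, window):
    """Count identical non-overlapping windows of `window` consecutive events."""
    counts = Counter(
        tuple(events[i:i + window]) for i in range(0, len(events) - window + 1, window)
    )
    return {w: n for w, n in counts.items() if n > 1}

# The three-event window (send, recv, send) repeats three times: a candidate phase.
for phase, n in repeated_phases(trace, window=3).items():
    print(n, "x", phase)
```

    Because each process summarises its own trace locally, repeated behaviour can be detected without synchronising on every event, which is the spirit of the dependency reduction described above.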

    Fault tolerance at system level based on RADIC architecture

    The increasing failure rate in High Performance Computing encourages the investigation of fault tolerance mechanisms that guarantee the execution of an application in spite of node faults. This paper presents an automatic and scalable fault tolerance model designed to be transparent to applications and to message-passing libraries. The model detects failures in the communication socket caused by a faulty node; in those cases, the affected processes are recovered on a healthy node and the connections are re-established without losing data. The Redundant Array of Distributed Independent Controllers (RADIC) architecture proposes a decentralized model for all the tasks required in a fault tolerance system: protection, detection, recovery and masking. Decentralized algorithms allow the application to scale, which is a key property for current HPC systems. Three different rollback recovery protocols are defined and discussed with the aim of offering alternatives that reduce overhead when multicore systems are used. A prototype has been implemented to carry out an exhaustive experimental evaluation using Master/Worker and Single Program Multiple Data execution models. Multiple workloads and an increasing number of processes have been taken into account to compare the above-mentioned protocols. The executions take place in two multicore Linux clusters with different socket communication libraries.
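    The protection/detection/recovery cycle can be pictured with a minimal checkpoint-and-restart sketch in which a worker process periodically reports progress and is relaunched from its last checkpoint after a simulated fault. The worker, the checkpoint format and the failure injection are invented for illustration; this is not the RADIC prototype or its socket-level detection.

```python
import multiprocessing as mp

def worker(items, start_total, ckpt_q, fail_once):
    """Sum of squares over `items`; checkpoints progress so a restart can resume."""
    total = start_total
    for i, x in enumerate(items):
        total += x * x
        if i % 100 == 99:
            ckpt_q.put((i + 1, total))       # protection: periodic checkpoint
        if fail_once and i == 250:
            raise SystemExit(1)              # simulated node fault
    ckpt_q.put((len(items), total))          # final result acts as the last checkpoint

def run_with_recovery(items):
    ckpt_q, done, total, fail_once = mp.Queue(), 0, 0, True
    while done < len(items):
        p = mp.Process(target=worker, args=(items[done:], total, ckpt_q, fail_once))
        p.start()
        p.join()                             # detection: wait for normal exit or failure
        progress = 0
        while not ckpt_q.empty():            # keep only the most recent checkpoint
            progress, total = ckpt_q.get()
        done += progress                     # recovery: resume from the checkpointed state
        fail_once = False
    return total

if __name__ == "__main__":
    data = list(range(1000))
    assert run_with_recovery(data) == sum(x * x for x in data)
    print("recovered result matches fault-free execution")
```

    In a real rollback recovery protocol the checkpoint would be taken and stored by a controller on another node and the restarted process would rejoin the running application; the sketch only shows why periodic state saving bounds the work lost to a fault.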

    Evaluation of response capacity to patient attention demand in an Emergency Department

    The progressive aging of the population, increased life expectancy and a greater number of chronic diseases contribute significantly to the growing demand for emergency medical care and thus to the saturation of Emergency Departments, one of the most important current problems in healthcare systems worldwide. This work proposes an analytical model to calculate the theoretical throughput of a particular staff configuration in a hospital Emergency Department, that is, the number of patients it can attend per unit of time given its composition. The analytical model is validated against data generated by an agent-based simulation of the real system, which makes it possible to consider different valid staff configurations and different numbers of patients entering the emergency service. In fact, we aim to evaluate the response capacity of an ED, specifically of the doctors, nurses, admission and triage personnel that make up a staff configuration, for any possible configuration, according to the patient flow through the service. It would not be possible to test the different possible situations in the real system, which is the main reason why we use a simulator as a sensor of the real system to obtain the information needed to validate the model. The theoretical throughput is a measure of the system's capacity to respond to patient demand and will also serve as a reference for a future model that plans the admission of non-critical patients into the service by relocating them within the current input pattern, an immediate goal of our current research. This research makes relevant knowledge available to Emergency Department managers to help them make decisions that improve the quality of the service, in anticipation of the growing demand expected in the very near future.
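    A simple way to picture a "theoretical throughput for a given staff configuration" is a bottleneck calculation: each care phase can process staff/service-time patients per hour, and the configuration is limited by its slowest phase. The staff counts and service times below are invented, and this is not the analytical model proposed in the paper, only a sketch of the kind of quantity it computes.

```python
# phase -> (number of staff, mean service time per patient, in hours)  -- assumed values
staff_config = {
    "admission": (2, 0.10),
    "triage":    (2, 0.15),
    "doctor":    (5, 0.50),
    "nurse":     (6, 0.30),
}

def theoretical_throughput(config):
    """Patients per hour the configuration can attend, limited by its bottleneck phase."""
    per_phase = {phase: staff / service for phase, (staff, service) in config.items()}
    bottleneck = min(per_phase, key=per_phase.get)
    return per_phase[bottleneck], bottleneck

rate, phase = theoretical_throughput(staff_config)
print(f"theoretical throughput: {rate:.1f} patients/hour (bottleneck: {phase})")
```

    Comparing this kind of estimate against simulated patient flow for several staff configurations is the validation pattern the abstract describes: the simulator acts as a sensor of the real system against which the analytical figure is checked.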

    Predicting robustness against transient faults of MPI based programs

    Evaluating a program's behaviour in the presence of transient faults is often very time-consuming: to obtain significant data, thousands of executions are required, and each execution carries the considerable overhead of the fault-injection environment. A previously published methodology significantly reduced the time needed to evaluate the robustness of a program execution by exhaustively analysing its execution trace instead of using fault injection. In this paper we present a further improvement in the time needed to evaluate the robustness of parallel programs against transient faults by combining this methodology with PAS2P, a method that describes an application based on its message-passing activity. This combination allowed us to predict the robustness of larger parallel programs, reducing in some cases by more than 20 times the time needed to calculate the robustness, while keeping the robustness prediction error below 4%.